# Lightweight Instruction Model
## Doge 120M MoE Instruct

**Publisher:** SmallDoge · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, English

The Doge model employs a dynamic masked attention mechanism for sequence transformation and can use either multi-layer perceptrons or a cross-domain mixture of experts for state transitions.
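A minimal usage sketch with Hugging Face Transformers follows. The Hub repo id `SmallDoge/Doge-120M-MoE-Instruct` is an assumption inferred from the listing name (check the SmallDoge organization page for the exact id), and because Doge uses custom modeling code, loading typically needs `trust_remote_code=True`:

```python
# Minimal sketch: load and prompt the Doge instruct model with Transformers.
# The repo id below is an ASSUMPTION based on the listing name, not a verified id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallDoge/Doge-120M-MoE-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Build a chat-style prompt via the tokenizer's chat template, then generate.
messages = [{"role": "user", "content": "Explain mixture of experts in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```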
## Qwen2.5 0.5B Instruct

**Publisher:** Gensyn · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, English

A 0.5B-parameter instruction-tuned model built for the Gensyn reinforcement learning swarm, with support for local fine-tuning.
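Since the card highlights local fine-tuning, the sketch below runs a single supervised training step on one example. The repo id `Gensyn/Qwen2.5-0.5B-Instruct`, the toy training text, and the hyperparameters are illustrative assumptions, not the Gensyn training recipe:

```python
# Minimal local fine-tuning sketch: one supervised causal-LM step.
# The repo id, example text, and learning rate are ASSUMPTIONS for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gensyn/Qwen2.5-0.5B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One illustrative step: next-token loss on a tiny instruction-style example.
text = "User: What is 2 + 2?\nAssistant: 4"
batch = tokenizer(text, return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"loss: {loss.item():.4f}")
```

In a real run this step would loop over a dataset with batching, gradient accumulation, and a learning-rate schedule; at 0.5B parameters the model is small enough to fine-tune on a single consumer GPU.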